
    Learning Timbre Analogies from Unlabelled Data by Multivariate Tree Regression

    This is the Author's Original Manuscript of an article whose final and definitive form, the Version of Record, has been published in the Journal of New Music Research, November 2011, copyright Taylor & Francis. The published article is available online at http://www.tandfonline.com/10.1080/09298215.2011.596938

    Anti-Trust and Economic Theory: Some Observations from the US Experience

    Recent developments in US anti-trust can be characterised as reflecting the uneasy interaction of two quite separate phenomena: first, the increased emphasis on economic analysis as the overriding organising principle of anti-trust policy and on economic efficiency as the primary (perhaps only) relevant goal for anti-trust; second, the long-standing reluctance of the federal judiciary to involve itself in any substantive economic analysis, and the preference, instead, for simple rules of thumb or ‘pigeon holes’ to sort out lawful from unlawful conduct. The result has been that while economics has played a major role, it has not influenced American anti-trust as thoroughly or as uniformly as might have been imagined; rather, the extent and the nature of its influence have depended on the degree to which the relevant economics could be reduced to the kind of simple rules or pigeon holes that the judiciary favours. The present paper will illustrate that theme, first by reporting on the two developments separately and then by illustrating their joint influence with reference to two important areas of American anti-trust: predatory conduct and so-called vertical restraints. Finally, a contrast will be made between judicial development in those two areas and recent American merger policy which, it is argued, is carried out largely independently of the judiciary, and hence the opportunities for economics to influence the process are less inhibited by the judicial reluctance to undertake extensive economic analysis.

    A novel feature selection-based sequential ensemble learning method for class noise detection in high-dimensional data

    © 2018, Springer Nature Switzerland AG. Irrelevant or noisy features in high-dimensional data pose significant challenges to methods that detect mislabeled instances via feature selection. Traditional methods typically perform two dependent steps: first, searching for a relevant feature subspace; second, training a model on the subspace obtained in the first step. However, a feature subspace selected without reference to the noise scores can degrade detection performance. In this paper, we propose a novel sequential ensemble method, SENF, that couples these two phases: it learns sequential ensembles that iteratively refine the feature subspace and improve detection accuracy through sparse modeling with the noise scores as the regression target. Through extensive experiments on 8 real-world high-dimensional datasets from the UCI machine learning repository [3], we show that SENF performs significantly better than, or at least comparably to, the individual baselines as well as the existing state-of-the-art label noise detection method.
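    As a rough illustration of the alternating scheme the abstract describes, here is a minimal Python sketch, not the authors' SENF implementation: the kNN-disagreement proxy for the noise score and the choice of Lasso as the sparse model are assumptions, made only to make the refine-then-rescore loop concrete.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neighbors import NearestNeighbors

def knn_noise_scores(X, y, k=5):
    """Proxy noise score (an assumed choice): fraction of an instance's
    k nearest neighbours whose label disagrees with its own."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    neighbour_labels = y[idx[:, 1:]]          # drop the self-neighbour
    return (neighbour_labels != y[:, None]).mean(axis=1)

def sequential_noise_detection(X, y, n_rounds=5, alpha=0.01):
    """Alternate (1) scoring instances in the current subspace and
    (2) sparse regression with the scores as target to refine the
    subspace; return the final scores and the retained features."""
    feats = np.arange(X.shape[1])
    for _ in range(n_rounds):
        scores = knn_noise_scores(X[:, feats], y)
        lasso = Lasso(alpha=alpha).fit(X[:, feats], scores)
        kept = feats[np.abs(lasso.coef_) > 1e-8]
        if kept.size == 0:                    # sparse model dropped everything
            break
        feats = kept
    return knn_noise_scores(X[:, feats], y), feats
```

    Instances with the highest final scores would be flagged as likely mislabeled; the number of rounds and the sparsity level alpha are the knobs this sketch exposes.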

    OWA-FRPS: A Prototype Selection method based on Ordered Weighted Average Fuzzy Rough Set Theory

    The Nearest Neighbor (NN) algorithm is a well-known and effective classification algorithm. Prototype Selection (PS), which provides NN with a good training set to pick its neighbors from, is an important topic as NN is highly susceptible to noisy data. Accurate state-of-the-art PS methods are generally slow, which motivates us to propose a new PS method, called OWA-FRPS. Based on the Ordered Weighted Average (OWA) fuzzy rough set model, we express the quality of instances, and use a wrapper approach to decide which instances to select. An experimental evaluation shows that OWA-FRPS is significantly more accurate than state-of-the-art PS methods without requiring a high computational cost.
    Spanish Government TIN2011-2848
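    The following is a minimal, hypothetical Python sketch of an instance-quality measure in the spirit of the abstract, not the published OWA-FRPS code: the Gaussian similarity relation, the Kleene-Dienes implication, and the linearly decreasing OWA weights are all assumed choices.

```python
import numpy as np

def owa_weights(n):
    """Linearly decreasing OWA weights: a 'soft minimum' when the
    aggregated values are sorted in ascending order."""
    w = np.arange(n, 0, -1, dtype=float)
    return w / w.sum()

def frps_quality(X, y, gamma=1.0):
    """OWA fuzzy-rough lower-approximation quality of each instance:
    high when the instances most similar to it share its label."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    sim = np.exp(-gamma * d)                  # assumed fuzzy similarity
    n = len(X)
    qual = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        # Kleene-Dienes implication sim(i,j) -> (y_j == y_i) with a
        # crisp consequent: 1 for same-label pairs, 1 - sim otherwise
        impl = np.where(y[mask] == y[i], 1.0, 1.0 - sim[i, mask])
        qual[i] = owa_weights(n - 1) @ np.sort(impl)
    return qual
```

    The wrapper step the abstract mentions would then keep the instances whose quality exceeds a threshold, choosing the threshold by, for example, leave-one-out 1NN accuracy over the selected prototypes.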

    Collusion through Joint R&D: An Empirical Assessment

    This paper tests whether upstream R&D cooperation leads to downstream collusion. We consider an oligopolistic setting where firms enter in research joint ventures (RJVs) to lower production costs or coordinate on collusion in the product market. We show that a sufficient condition for identifying collusive behavior is a decline in the market share of RJV-participating firms, which is also necessary and sufficient for a decrease in consumer welfare. Using information from the US National Cooperation Research Act, we estimate a market share equation correcting for the endogeneity of RJV participation and R&D expenditures. We find robust evidence that large networks between direct competitors – created through firms being members in several RJVs at the same time – are conducive to collusive outcomes in the product market which reduce consumer welfare. By contrast, RJVs among non-competitors are efficiency enhancing.
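    "Correcting for the endogeneity of RJV participation" suggests an instrumental-variables estimator. Below is a generic two-stage least squares sketch in Python; the variable layout (market share as the outcome, RJV participation and R&D as endogenous regressors, Z as instruments) is assumed for illustration and is not taken from the paper.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, X_exog, Z):
    """Basic 2SLS point estimates: project the endogenous regressors
    on the instruments and controls, then run OLS of y on the fitted
    values plus the exogenous controls and a constant."""
    W = np.column_stack([Z, X_exog])            # instruments + controls
    # Stage 1: fitted values of the endogenous regressors
    beta1, *_ = np.linalg.lstsq(W, X_endog, rcond=None)
    X_hat = W @ beta1
    # Stage 2: OLS with the fitted endogenous regressors
    X2 = np.column_stack([X_hat, X_exog, np.ones(len(y))])
    beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta2
```

    Under the paper's identification result, a negative estimated effect of RJV participation on market share would be read as evidence of collusive behavior.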

    Finding Anomalous Periodic Time Series: An Application to Catalogs of Periodic Variable Stars

    Catalogs of periodic variable stars contain large numbers of periodic light-curves (photometric time series data from the astrophysics domain). Separating anomalous objects from well-known classes is an important step towards the discovery of new classes of astronomical objects. Most anomaly detection methods for time series data assume either a single continuous time series or a set of time series whose periods are aligned. Light-curve data precludes the use of these methods as the periods of any given pair of light-curves may be out of sync. One may use an existing anomaly detection method if, prior to similarity calculation, one performs the costly act of aligning two light-curves, an operation that scales poorly to massive data sets. This paper presents PCAD, an unsupervised anomaly detection method for large sets of unsynchronized periodic time-series data, that outputs a ranked list of both global and local anomalies. It calculates its anomaly score for each light-curve in relation to a set of centroids produced by a modified k-means clustering algorithm. Our method is able to scale to large data sets through the use of sampling. We validate our method on both light-curve data and other time series data sets. We demonstrate its effectiveness at finding known anomalies, and discuss the effect of sample size and number of centroids on our results. We compare our method to naive solutions and existing time series anomaly detection methods for unphased data, and show that PCAD's reported anomalies are comparable to or better than all other methods. Finally, astrophysicists on our team have verified that PCAD finds true anomalies that might be indicative of novel astrophysical phenomena.
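    A compact Python sketch of the idea, under stated assumptions rather than the authors' PCAD code: alignment is brute-force over circular shifts of the phased curves, and the "modified k-means" step simply re-aligns each member to its centroid before averaging.

```python
import numpy as np

def aligned_distance(a, b):
    """Distance between two phased light-curves, minimised over all
    circular shifts of b (brute-force alignment)."""
    return min(np.linalg.norm(a - np.roll(b, s)) for s in range(len(b)))

def pcad_scores(curves, k=5, n_iter=10, seed=0):
    """k-means over unsynchronized periodic curves: centroids are
    averaged from members shifted into alignment, and each curve's
    anomaly score is its aligned distance to the nearest centroid."""
    rng = np.random.default_rng(seed)
    C = curves[rng.choice(len(curves), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d = np.array([[aligned_distance(x, c) for c in C] for x in curves])
        labels = d.argmin(axis=1)
        for j in range(k):
            members = curves[labels == j]
            if len(members) == 0:
                continue
            aligned = [min((np.roll(m, s) for s in range(len(m))),
                           key=lambda v: np.linalg.norm(v - C[j]))
                       for m in members]
            C[j] = np.mean(aligned, axis=0)
    # final aligned distance to the nearest centroid = anomaly score
    d = np.array([[aligned_distance(x, c) for c in C] for x in curves])
    return d.min(axis=1)
```

    The quadratic cost of the shift search is exactly why, per the abstract, the full method relies on sampling to scale to massive catalogs.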

    Oblique decision trees for spatial pattern detection: optimal algorithm and application to malaria risk

    BACKGROUND: In order to detect potential disease clusters where a putative source cannot be specified, classical procedures scan the geographical area with circular windows through a specified grid imposed on the map. However, the choice of the windows' shapes, sizes and centers is critical, and different choices may not provide exactly the same results. The aim of our work was to use an Oblique Decision Tree model (ODT) which provides potential clusters without pre-specifying shapes, sizes or centers. For this purpose, we have developed an ODT-algorithm to find an oblique partition of the space defined by the geographic coordinates. METHODS: ODT is based on the classification and regression tree (CART). Whereas CART finds rectangular partitions of the covariate space, ODT provides oblique partitions maximizing the interclass variance of the independent variable. Since this is an NP-hard problem in R^N, classical ODT-algorithms use evolutionary procedures or heuristics. We have developed an optimal ODT-algorithm in R^2, based on the directions defined by each pair of point locations. This partition provided potential clusters which can be tested with Monte-Carlo inference. We applied the ODT-model to a dataset in order to identify potential high-risk clusters of malaria in a village in Western Africa during the dry season. The ODT results were compared with those of Kulldorff's SaTScan™. RESULTS: The ODT procedure provided four classes of risk of infection. In the first high-risk class, 60% (95% confidence interval, CI95% [52.22–67.55]) of the children were infected. Monte-Carlo inference showed that the spatial pattern issued from the ODT-model was significant (p < 0.0001). SaTScan results yielded one significant cluster where the risk of disease was high, with an infection rate of 54.21% (CI95% [47.51–60.75]). As expected, its center was located within the first high-risk ODT class. Both procedures provided similar results, identifying a high-risk cluster in the western part of the village where a mosquito breeding point was located. CONCLUSION: ODT-models improve on classical scanning procedures by detecting potential disease clusters independently of any specification of the shapes, sizes or centers of the clusters.
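    A minimal Python sketch of the exhaustive R^2 search the abstract describes is given below. Treating the outcome z as a binary infection indicator and the exact split-scoring details are assumptions made for illustration; this is one oblique split, not the full recursive tree.

```python
import numpy as np

def best_oblique_split(P, z):
    """Exhaustive oblique split in R^2: for every direction defined by
    a pair of locations, project all points onto it and keep the
    threshold that maximises the between-class (interclass) variance
    of the outcome z. Returns (variance, direction, threshold)."""
    n = len(P)
    best = (-np.inf, None, None)
    grand_mean = z.mean()
    for i in range(n):
        for j in range(i + 1, n):
            u = P[j] - P[i]
            if not np.linalg.norm(u):
                continue                      # coincident locations
            u = u / np.linalg.norm(u)
            t = P @ u                         # 1-D projections
            order = np.argsort(t)
            zs, ts = z[order], t[order]
            for cut in range(1, n):           # split between consecutive ranks
                left, right = zs[:cut], zs[cut:]
                var = (cut * (left.mean() - grand_mean) ** 2 +
                       (n - cut) * (right.mean() - grand_mean) ** 2) / n
                if var > best[0]:
                    best = (var, u, (ts[cut - 1] + ts[cut]) / 2)
    return best
```

    Each candidate partition found this way would then be assessed with Monte-Carlo inference, as the abstract notes, by re-running the search on permuted outcomes.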
